20 research outputs found

    Dynamic Pricing and Learning: Historical Origins, Current Research, and New Directions


    Continuous-state reinforcement learning with fuzzy approximation

    Peer reviewed
    Reinforcement learning (RL) is a widely used learning paradigm for adaptive agents. Well-understood RL algorithms with good convergence and consistency properties exist, but in their original form they require that the environment states and agent actions take values in a relatively small discrete set. Fuzzy representations for approximate, model-free RL have been proposed in the literature for the more difficult case where the state-action space is continuous. In this work, we propose a fuzzy approximation structure similar to those previously used for Q-learning, but we combine it with the model-based Q-value iteration algorithm. We show that the resulting algorithm converges. We also give a modified, serial variant of the algorithm that converges at least as fast as the original version. An illustrative simulation example is provided.
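To make the idea concrete, here is a minimal sketch of model-based Q-value iteration with a triangular fuzzy (membership-function) approximator on a 1-D continuous state space. The dynamics, reward, grid, and parameters below are illustrative assumptions, not the paper's benchmark.

```python
import numpy as np

centers = np.linspace(0.0, 1.0, 11)   # triangular membership centers
actions = np.array([-0.1, 0.1])       # two discrete actions
gamma = 0.9                           # discount factor

def memberships(x):
    """Normalised triangular membership degrees of state x over the grid."""
    width = centers[1] - centers[0]
    phi = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    return phi / phi.sum()

def step(x, u):
    """Assumed known model: move by u, clipped to [0, 1]; reward near x = 1."""
    x2 = float(np.clip(x + u, 0.0, 1.0))
    return x2, (1.0 if x2 > 0.95 else 0.0)

theta = np.zeros((len(centers), len(actions)))  # Q-values at the centers
for _ in range(200):                            # synchronous Q-iteration
    new = np.empty_like(theta)
    for i, xi in enumerate(centers):
        for a, u in enumerate(actions):
            x2, r = step(xi, u)
            # interpolate Q(x2, .) through the fuzzy approximator
            new[i, a] = r + gamma * (memberships(x2) @ theta).max()
    theta = new

def greedy_action(x):
    return actions[int(np.argmax(memberships(x) @ theta))]

print(greedy_action(0.3))  # greedy policy should steer toward the goal at x = 1
```

The serial variant mentioned in the abstract would update `theta` in place, element by element, instead of computing a full synchronous sweep into `new`.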

    Reinforcement learning method for ad networks ordering in real-time bidding

    The high turnover of online advertising, and of real-time bidding in particular, makes this ad market very attractive to its stakeholders. For publishers, it is as easy as placing slots in their web pages and selling those slots in the available online auctions. It is important to determine which online auction market to send each slot to. Under the traditional waterfall strategy, publishers have a fixed ordering of preferred online auction markets and sell ad slots by trying these markets sequentially. This fixed-order strategy relies heavily on the experience of publishers and often does not provide the highest revenue. In this paper, we propose a method for dynamically deciding the ordering of auction markets for each available ad slot. The method is based on reinforcement learning (RL) and learns state-action values with a tabular method. Since the state-action space is sparse, a prediction model is used to address this sparsity. We analyze a real-time bidding dataset and show that the proposed RL method leads to higher revenues on this dataset. In addition, a sensitivity analysis is performed on the parameters of the method.
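A tabular RL scheme of this kind can be sketched as epsilon-greedy bandit-style learning over (slot context, market) pairs. The slot contexts, payout probabilities, and parameters below are assumptions for illustration, not the paper's dataset or exact method.

```python
import random
from collections import defaultdict

random.seed(0)
markets = ["A", "B", "C"]        # candidate auction markets
contexts = ["banner", "video"]   # discretized slot context = state

# hypothetical ground truth: probability of a paid impression per pair
true_revenue = {("banner", "A"): 0.2, ("banner", "B"): 0.5, ("banner", "C"): 0.1,
                ("video", "A"): 0.7, ("video", "B"): 0.3, ("video", "C"): 0.4}

Q = defaultdict(float)   # tabular state-action value estimates
N = defaultdict(int)     # visit counts for sample-average updates
epsilon = 0.2

for _ in range(20000):
    ctx = random.choice(contexts)
    if random.random() < epsilon:                       # explore
        m = random.choice(markets)
    else:                                               # exploit
        m = max(markets, key=lambda a: Q[(ctx, a)])
    reward = 1.0 if random.random() < true_revenue[(ctx, m)] else 0.0
    N[(ctx, m)] += 1
    Q[(ctx, m)] += (reward - Q[(ctx, m)]) / N[(ctx, m)]  # running average

best = {ctx: max(markets, key=lambda a: Q[(ctx, a)]) for ctx in contexts}
print(best)  # learned first-choice market per slot context
```

Ordering the remaining markets by their learned values for each context gives the dynamic replacement for the fixed waterfall ordering.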

    DABS-Storm: A Data-Aware Approach for Elastic Stream Processing

    In the last decade, stream processing has become a very active research domain, motivated by the growing number of stream-based applications. These applications make use of continuous queries, which are processed by a stream processing engine (SPE) to generate timely results from ephemeral input data. Variations in input data streams, in terms of both volume and distribution of values, have a large impact on computational resource requirements. Dynamic and Automatic Balanced Scaling for Storm (DABS-Storm) is an original solution for dynamically adapting continuous query processing to the evolution of input stream properties while controlling system stability. DABS-Storm handles both fluctuations in data volume and in the distribution of values within data streams in order to adjust resource usage to processing needs. To achieve this goal, the DABS-Storm holistic approach combines a proactive auto-parallelization algorithm with a latency-aware load balancing strategy.
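The core trade-off an elastic SPE faces can be sketched as a scaling rule that picks an operator's parallelism from a predicted input rate while keeping the system stable. The function below is a generic illustration under assumed rates and per-task capacity; it is not DABS-Storm's actual algorithm.

```python
import math

def required_parallelism(predicted_rate, capacity_per_task, current, slack=0.2):
    """Return the task count needed to absorb predicted_rate (tuples/s),
    keeping the current parallelism when the rate already sits inside a
    tolerance band, to avoid rescaling oscillations (stability)."""
    needed = math.ceil(predicted_rate / capacity_per_task)
    low = capacity_per_task * current * (1 - slack)   # band lower bound
    high = capacity_per_task * current                # band upper bound
    if low <= predicted_rate <= high:
        return current    # inside the band: do not rescale
    return needed

print(required_parallelism(950, 500, current=2))    # stays at 2 (in band)
print(required_parallelism(2600, 500, current=2))   # scales out
print(required_parallelism(300, 500, current=2))    # scales in
```

A proactive scheme applies this to a *predicted* future rate rather than the current one, so scaling happens before the backlog builds up; the latency-aware load balancer then spreads keys across the chosen tasks.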

    An Approach Based on Bayesian Networks for Query Selectivity Estimation

    The efficiency of a query execution plan depends on the accuracy of the selectivity estimates given to the query optimiser by the cost model. The cost model makes simplifying assumptions in order to produce these estimates in a timely manner. These assumptions lead to selectivity estimation errors that have dramatic effects on the quality of the resulting query execution plans. A convenient assumption that is ubiquitous among current cost models is that attributes are independent of one another. However, this ignores potential correlations, which can have a huge negative impact on the accuracy of the cost model. In this paper we attempt to relax the attribute value independence assumption without unreasonably deteriorating the accuracy of the cost model. We propose a novel approach based on a particular type of Bayesian network, called Chow-Liu trees, to approximate the distribution of attribute values inside each relation of a database. Our results on the TPC-DS benchmark show that our method is an order of magnitude more precise than other approaches whilst remaining reasonably efficient in terms of time and space.
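The Chow-Liu construction itself is compact: estimate pairwise mutual information between attributes from a sample, then keep only the maximum spanning tree over those weights. Below is a minimal sketch on an assumed toy relation (the columns, values, and correlations are invented for illustration).

```python
import math
from collections import Counter
from itertools import combinations

rows = [  # (country, currency, colour): country and currency are correlated
    ("FR", "EUR", "red"), ("FR", "EUR", "blue"), ("DE", "EUR", "red"),
    ("US", "USD", "blue"), ("US", "USD", "red"), ("FR", "EUR", "red"),
]
n_attrs = len(rows[0])

def mutual_information(i, j):
    """Empirical mutual information between attributes i and j (nats)."""
    n = len(rows)
    pi = Counter(r[i] for r in rows)
    pj = Counter(r[j] for r in rows)
    pij = Counter((r[i], r[j]) for r in rows)
    return sum((c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

# maximum spanning tree (Kruskal) over attribute pairs weighted by MI
edges = sorted(combinations(range(n_attrs), 2),
               key=lambda e: mutual_information(*e), reverse=True)
parent = list(range(n_attrs))
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
tree = []
for i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:          # keep the edge only if it joins two components
        parent[ri] = rj
        tree.append((i, j))

print(tree)  # the dependency edges retained by the Chow-Liu tree
```

The joint distribution is then approximated by per-attribute marginals plus one conditional distribution per retained edge, which is what keeps the estimator cheap in time and space.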

    Selectivity Estimation with Attribute Value Dependencies Using Linked Bayesian Networks

    ISBN 978-3-662-62385-5
    Relational query optimisers rely on cost models to choose between different query execution plans. Selectivity estimates are known to be a crucial input to the cost model. In practice, standard selectivity estimation procedures are prone to large errors, mostly because they rely on the so-called attribute value independence and join uniformity assumptions. Multidimensional methods have therefore been proposed to capture dependencies between two or more attributes, both within and across relations. However, these methods incur a large computational cost, which makes them unusable in practice. We propose a method based on Bayesian networks that captures cross-relation attribute value dependencies with little overhead. Our proposal is based on the assumption that dependencies between attributes are preserved when joins are involved. Furthermore, we introduce a parameter for trading off estimation accuracy against computational cost. We validate our work by comparing it with other relevant methods on a large workload derived from the JOB and TPC-DS benchmarks. Our results show that our method is an order of magnitude more efficient than existing methods, whilst maintaining a high level of accuracy.
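Why independence assumptions hurt, and what a Bayesian-network factorisation buys, can be shown in a few lines. The toy relation and conjunctive predicate below are assumptions for illustration only.

```python
# toy relation: (vehicle_type, fuel) with a strong correlation
rows = ([("sedan", "petrol")] * 40 + [("sedan", "diesel")] * 10
        + [("truck", "diesel")] * 45 + [("truck", "petrol")] * 5)

n = len(rows)
p_truck = sum(r[0] == "truck" for r in rows) / n    # P(type = truck)
p_diesel = sum(r[1] == "diesel" for r in rows) / n  # P(fuel = diesel)

# attribute value independence: P(truck AND diesel) ~ P(truck) * P(diesel)
est_independent = p_truck * p_diesel

# one-edge Bayesian network: P(truck) * P(diesel | truck)
p_diesel_given_truck = (sum(r == ("truck", "diesel") for r in rows)
                        / sum(r[0] == "truck" for r in rows))
est_conditional = p_truck * p_diesel_given_truck

true_sel = sum(r == ("truck", "diesel") for r in rows) / n
print(est_independent, est_conditional, true_sel)
```

With two attributes the conditional factorisation is exact by the chain rule; the practical point is that a network stores only a small set of such conditionals (here, one edge), whereas the independence estimate is off by a large factor whenever attributes are correlated.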